Safe Distillation Box
Authors
Abstract
Knowledge distillation (KD) has recently emerged as a powerful strategy to transfer knowledge from a pre-trained teacher model to a lightweight student, and has demonstrated its unprecedented success over a wide spectrum of applications. In spite of the encouraging results, the KD process per se poses a potential threat to network ownership protection, since the knowledge contained in a trained model can be effortlessly distilled and hence exposed to a malicious user. In this paper, we propose a novel framework, termed Safe Distillation Box (SDB), that allows us to wrap a pre-trained model in a virtual box for intellectual property protection. Specifically, SDB preserves the inference capability of the wrapped model for all users, but precludes KD from unauthorized users. For authorized users, on the other hand, SDB carries out a knowledge augmentation scheme to strengthen the KD performances and the results of the student model. In other words, all users may employ a model in SDB for inference, but only authorized users get access to KD from the model. The proposed SDB imposes no constraints over the model architecture, and may readily serve as a plug-and-play solution to protect the ownership of a pre-trained network. Experiments across various datasets and architectures demonstrate that, with SDB, the performance of an unauthorized KD drops significantly, while that of an authorized KD gets enhanced, demonstrating the effectiveness of SDB.
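For readers unfamiliar with the distillation step that SDB seeks to control, the sketch below shows a standard knowledge-distillation objective: a temperature-scaled KL term against the teacher's soft predictions plus a cross-entropy term on the hard labels. This is illustrative context only, not the paper's protection method; the temperature T, the weight alpha, and the use of PyTorch are assumptions for the example.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets from the (frozen) teacher, softened by temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between the softened distributions, scaled by T^2 as is customary.
    distill = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Ordinary supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    # alpha balances imitating the teacher against fitting the labels.
    return alpha * distill + (1.0 - alpha) * ce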
Similar Resources
Representation inheritance: a safe form of "white box" code inheritance
Inheritance as a programming language mechanism can be used to achieve several different goals, both in terms of expressing relationships between components and in terms of defining new components "by difference" from existing ones. For defining new component implementations in terms of existing implementations, there are several approaches to using "code inheritance." Black box code inheritance al...
Auditing Black-Box Models Using Transparent Model Distillation With Side Information
Black-box risk scoring models permeate our lives, yet are typically proprietary or opaque. We propose a transparent model distillation approach to audit such models. Model distillation was first introduced to transfer knowledge from a large, complex teacher model to a faster, simpler student model without significant loss in prediction accuracy. To this we add a third criterion: transparency. To...
Dropout distillation
Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process makes it possible to implicitly train an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for t...
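As a rough illustration of the mechanism this excerpt describes (units dropped at random during training, with the implicit ensemble approximated by a single deterministic pass at test time), a minimal PyTorch sketch follows; the layer sizes and drop probability are arbitrary assumptions, not values from the cited paper.

import torch
import torch.nn as nn

# Small network with standard dropout between layers.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # each hidden unit is zeroed with probability 0.5 during training
    nn.Linear(256, 10),
)

x = torch.randn(32, 784)

model.train()                 # training mode: a random "thinned" sub-network is sampled per forward pass
stochastic_out = model(x)

model.eval()                  # evaluation mode: dropout is the identity (inverted dropout rescales at
deterministic_out = model(x)  # train time), giving the usual single-pass approximation of the ensemble average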
Policy Distillation
Policies for complex visual tasks have been successfully learned with deep reinforcement learning, using an approach called deep Q-networks (DQN), but relatively large (task-specific) networks and extensive training are needed to achieve good performance. In this work, we present a novel method called policy distillation that can be used to extract the policy of a reinforcement learning agent a...
Distillation Startup of Fully Thermally Coupled Distillation Columns: Theoretical Examinations
The fully thermally coupled distillation column offers an alternative to conventional distillation towers, with the possibility of savings in both energy and capital costs. This innovative and promising alternative provides the opportunity to separate a multicomponent mixture into fractions with high purities merely in one column. A lack of knowledge still exists when dealing with the startup o...
Journal
Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence
Year: 2022
ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v36i3.20219